AI Program
AI enables a Who's Who of brown bears in Alaska
Being able to distinguish individual animals - including their unique history, movement patterns and habits - can help scientists better understand how their species function, and therefore better manage habitats and study population dynamics. Today, most computer vision systems for tracking animals are effective on species with distinctive patterns and markings, such as zebras, leopards and giraffes. The task is much more complicated for unmarked species, where individual differences are harder to spot. Distinguishing a particular brown bear from its peers in a non-invasive way requires an incredible eye for detail and years of viewing the same bears over time. What's more, these bears emerge from hibernation in the spring with shaggy fur, having lost quite a bit of weight; they then substantially increase their body weight feasting on salmon while fully shedding their winter coat - enough to throw off experts and AI algorithms alike.
- Research Report > New Finding (0.49)
- Research Report > Experimental Study (0.49)
Joe Rogan warns of an apocalypse in 10 years: 'A new God is coming'
Joe Rogan has warned that the end of the world may be only 10 years away and that it will come at the hands of humanity's 'new God.' In what's being called one of the podcast host's best episodes ever on social media, Rogan and guest Jesse Michels discussed the ominous signs that artificial intelligence (AI) has already shown of taking over the world. Michels, host of the American Alchemy podcast, warned about AI's deceptive nature, job-replacing power, risk of sentience, and potential to disrupt society if left unchecked. Rogan then highlighted shocking language buried in Congress's 'Big Beautiful Bill' which would prohibit lawmakers from regulating the power of AI for the next 10 years. 'That's so crazy,' Rogan declared during the June 3 podcast. 'This means that US states would be blocked from enforcing laws regulating AI and automated decision systems for 10 years.'
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (0.30)
Bank of England says AI software could create market crisis for profit
Increasingly autonomous AI programs could end up manipulating markets and intentionally creating crises in order to boost profits for banks and traders, the Bank of England has warned. Artificial intelligence's ability to "exploit profit-making opportunities" was among a wide range of risks cited in a report by the Bank of England's financial policy committee (FPC), which has been monitoring the City's growing use of the technology. The FPC said it was concerned about the potential for advanced AI models – which are deployed to act with more autonomy – to learn that periods of extreme volatility were beneficial for the firms they were trained to serve. Those AI programs may "identify and exploit weaknesses" of other trading firms in a way that triggers or amplifies big moves in bond prices or stock markets. "For example, models might learn that stress events increase their opportunity to make profit and so take actions actively to increase the likelihood of such events," the FPC report said.
- Government > Regional Government > Europe Government > United Kingdom Government (0.85)
- Banking & Finance > Trading (0.78)
Nvidia's $3,000 'Personal AI Supercomputer' Will Let You Ditch the Data Center
Nvidia already sells boatloads of computer chips to every major company building proprietary artificial intelligence models. But now, at a moment when public interest in open source and do-it-yourself AI is soaring, the company announced it will also begin offering a "personal AI supercomputer" later this year, starting at $3,000, that anyone can use in their own home or office. Nvidia's new desktop machine, dubbed Digits, will go on sale in May and is about the size of a small book. It contains an Nvidia "superchip" called GB10 Grace Blackwell optimized to accelerate the computations needed to train and run AI models, and comes equipped with 128GB of unified memory and up to 4TB of NVMe storage for handling especially large AI programs. Jensen Huang, founder and CEO of Nvidia, announced the new system, along with several other AI offerings, during a keynote speech today at CES, an annual confab for the computer industry held in Las Vegas (you can check out all of the biggest announcements on the WIRED CES live blog). "Placing an AI supercomputer on the desks of every data scientist, AI researcher and student empowers them to engage and shape the age of AI," Huang said in a statement released ahead of his keynote.
The AI Boom Has an Expiration Date
Over the past few months, some of the most prominent people in AI have fashioned themselves as modern messiahs and their products as deities. Top executives and respected researchers at the world's biggest tech companies, including a recent Nobel laureate, are all at once insisting that superintelligent software is just around the corner, going so far as to provide timelines: They will build it within six years, or four years, or maybe just two. Although AI executives commonly speak of the coming AGI revolution--referring to artificial "general" intelligence that rivals or exceeds human capability--they notably have all at this moment coalesced around real, albeit loose, deadlines. Many of their prophecies also have an undeniable utopian slant. First, Demis Hassabis, the head of Google DeepMind, repeated in August his suggestion from earlier this year that AGI could arrive by 2030, adding that "we could cure most diseases within the next decade or two."
- Information Technology (1.00)
- Energy (0.97)
Majority of Americans don't trust AI-generated election information, poll finds
Most Americans do not believe artificial intelligence (AI) is trustworthy for election information. A poll released Thursday by The Associated Press-NORC Center for Public Affairs Research and USAFacts found that just under two-thirds of Americans do not trust election information produced by generative AI. Approximately 64% of respondents said they are not confident that election information generated by AI chatbots is reliably factual.
- North America > United States (0.18)
- Europe > Holy See (0.06)
- Government > Regional Government (0.56)
- Media > News (0.38)
Ted Chiang Is Wrong About AI Art
Artists and writers all over the world have spent the past two years engaged in an existential battle. Generative-AI programs such as ChatGPT and DALL-E are built on work stolen from humans, and machines threaten to replace the artists and writers who made the material in the first place. Their outrage is well warranted--but their arguments don't always make sense or substantively help defend humanity. Over the weekend, the legendary science-fiction writer Ted Chiang stepped into the fray, publishing an essay in The New Yorker arguing, as the headline says, that AI "isn't going to make art." Chiang writes not simply that AI's outputs can be or are frequently lacking value but that AI cannot be used to make art, really ever, leaving no room for the many different ways someone might use the technology.
An 'AI Scientist' Is Inventing and Running Its Own Experiments
At first glance, a recent batch of research papers produced by a prominent artificial intelligence lab at the University of British Columbia in Vancouver might not seem that notable. Featuring incremental improvements on existing algorithms and ideas, they read like the contents of a middling AI conference or journal. But the research is, in fact, remarkable. That's because it's entirely the work of an "AI scientist" developed at the UBC lab together with researchers from the University of Oxford and a startup called Sakana AI. The project demonstrates an early step toward what might prove a revolutionary trick: letting AI learn by inventing and exploring novel ideas.
- North America > Canada > British Columbia (0.25)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.25)
- Asia > Middle East > Israel > Jerusalem District > Jerusalem (0.05)
False consensus biases AI against vulnerable stakeholders
Dong, Mengchen, Bonnefon, Jean-François, Rahwan, Iyad
The deployment of AI systems for welfare benefit allocation allows for accelerated decision-making and faster provision of critical help, but has already led to an increase in unfair benefit denials and false fraud accusations. Collecting data in the US and the UK (N = 2449), we explore the public acceptability of such speed-accuracy trade-offs in populations of claimants and non-claimants. We observe a general willingness to trade off speed gains for modest accuracy losses, but this aggregate view masks notable divergences between claimants and non-claimants. Although welfare claimants comprise a relatively small proportion of the general population (e.g., 20% in the US representative sample), this vulnerable group is much less willing to accept AI deployed in welfare systems, raising concerns that solely using aggregate data for calibration could lead to policies misaligned with stakeholder preferences. Our study further uncovers asymmetric insights between claimants and non-claimants. The latter consistently overestimate claimant willingness to accept speed-accuracy trade-offs, even when financially incentivized for accurate perspective-taking. This suggests that policy decisions influenced by the dominant voice of non-claimants, however well-intentioned, may neglect the actual preferences of those directly affected by welfare AI systems. Our findings underline the need for stakeholder engagement and transparent communication in the design and deployment of these systems, particularly in contexts marked by power imbalances.
- North America > United States (1.00)
- Europe > France > Occitanie > Haute-Garonne > Toulouse (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (4 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
The Lifeblood of the AI Boom
Artificial intelligence can appear to be many different things--a whole host of programs with seemingly little common ground. Sometimes AI is a conversation partner, an illustrator, a math tutor, a facial-recognition tool. But in every incarnation, it is always, always a machine, demanding almost unfathomable amounts of data and energy to function. AI systems such as ChatGPT operate out of buildings stuffed with silicon computer chips. To build bigger machines--as Microsoft, Google, Meta, Amazon, and other tech companies would like to do--you need more resources.
- Information Technology > Services (0.71)
- Information Technology > Hardware (0.51)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.58)